Real-time computer graphics is the subfield of computer graphics focused on producing and analyzing images in real time. The term is most often used in reference to interactive 3D computer graphics, typically using a GPU, with video games being the most prominent application. The term can also refer to anything from rendering an application's GUI to real-time image processing and image analysis.
Although computers have been capable from the beginning of generating 2D images involving simple lines, shapes and polygons in real time (e.g. with Bresenham's line drawing algorithm), generating good-quality 3D computer graphics at the speed necessary for a display screen has always been a daunting task for traditional Von Neumann architecture-based systems. The rest of this article concentrates on this widely accepted aspect of real-time graphics rather than expanding on the principles of real-time 2D computer graphics.
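As an illustration of the kind of classic 2D real-time drawing mentioned above, the following is a minimal sketch of Bresenham's line algorithm in C++. The plot callback is a hypothetical stand-in for writing a pixel into a framebuffer; the algorithm itself uses only integer arithmetic, which is what made it fast enough for early hardware.

    #include <cstdlib>
    #include <functional>

    // Draw a line from (x0, y0) to (x1, y1), calling plot() for every pixel.
    // Handles all octants; the error term tracks the distance to the true line.
    void drawLine(int x0, int y0, int x1, int y1,
                  const std::function<void(int, int)>& plot) {
        int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;
        while (true) {
            plot(x0, y0);
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }   // step in x
            if (e2 <= dx) { err += dx; y0 += sy; }   // step in y
        }
    }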
The goal of computer graphics is to generate an image, often called a frame, that meets certain desired quality metrics. How many of these frames can be generated in a given second determines whether a method qualifies as real-time.
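To make the constraint concrete, the tiny helper below (a hypothetical function, not part of any standard API) converts a target frame rate into the per-frame time budget that all work for a frame must fit inside.

    // Per-frame time budget in milliseconds for a given target frame rate.
    constexpr double frameBudgetMs(double targetFps) { return 1000.0 / targetFps; }
    // frameBudgetMs(30.0) is about 33.3 ms; frameBudgetMs(60.0) is about 16.7 ms.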
One interesting aspect of real-time computer graphics is the way in which it differs from traditional off-line (non-real-time) rendering systems. Non-real-time rendering typically relies on ray tracing, in which the expensive operation of tracing rays from the camera into the scene is permitted and can take hours or even days for a single frame. Real-time systems, on the other hand, have less than 1/30th of a second per image. They cannot afford to shoot millions or even billions of rays; instead, they rely on z-buffer triangle rasterization. In this technique, every object is decomposed into individual primitives, the most popular and common being the triangle. Each triangle is positioned, rotated and scaled on the screen, and a piece of dedicated hardware called a rasterizer (or, in a software renderer, a software rasterizer) generates the pixels covered by each triangle. In this way each triangle is decomposed into smaller atomic units, called fragments in computer-graphics terminology, that are suitable for display on the screen. Each fragment is then drawn using a certain color, and current systems can decide that color in various ways: for example, a texture can be used to 'paint' a triangle, which simply means looking up the output color of each pixel in a stored picture; or, in a more complex case, the system can compute at each pixel whether a given light source is visible, producing convincing shadows via a technique called shadow mapping.
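The sketch below illustrates the core of z-buffer triangle rasterization under simplifying assumptions: vertices are already in screen space, the triangle gets a single flat color, and perspective-correct interpolation is omitted. The Vec3 type and the depth/color buffers are illustrative names rather than any particular API; the depth buffer is assumed to be initialized to a large value.

    #include <vector>
    #include <algorithm>

    struct Vec3 { float x, y, z; };

    // Signed area-style edge function; its sign tells which side of edge (a, b) point p lies on.
    static float edge(const Vec3& a, const Vec3& b, const Vec3& p) {
        return (p.x - a.x) * (b.y - a.y) - (p.y - a.y) * (b.x - a.x);
    }

    void rasterizeTriangle(const Vec3& v0, const Vec3& v1, const Vec3& v2,
                           int width, int height,
                           std::vector<float>& depth,     // z-buffer, size width*height
                           std::vector<unsigned>& color,  // framebuffer, size width*height
                           unsigned triColor) {
        // Only visit pixels inside the triangle's bounding box, clamped to the screen.
        int minX = std::max(0, (int)std::min({v0.x, v1.x, v2.x}));
        int maxX = std::min(width - 1, (int)std::max({v0.x, v1.x, v2.x}));
        int minY = std::max(0, (int)std::min({v0.y, v1.y, v2.y}));
        int maxY = std::min(height - 1, (int)std::max({v0.y, v1.y, v2.y}));
        float area = edge(v0, v1, v2);
        if (area == 0.0f) return;  // degenerate triangle covers no pixels
        for (int y = minY; y <= maxY; ++y) {
            for (int x = minX; x <= maxX; ++x) {
                Vec3 p{x + 0.5f, y + 0.5f, 0.0f};
                float w0 = edge(v1, v2, p), w1 = edge(v2, v0, p), w2 = edge(v0, v1, p);
                // The pixel is covered if all three edge values share a sign.
                if ((w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0)) {
                    // Interpolate depth with barycentric weights.
                    float z = (w0 * v0.z + w1 * v1.z + w2 * v2.z) / area;
                    int idx = y * width + x;
                    if (z < depth[idx]) {   // z-buffer test: keep the nearest fragment
                        depth[idx] = z;
                        color[idx] = triColor;
                    }
                }
            }
        }
    }

Hardware rasterizers perform the same coverage and depth tests per pixel, but with many refinements (hierarchical tiling, perspective-correct attribute interpolation, early-z rejection) that are omitted from this sketch.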
Thus, real-time graphics is oriented toward extracting as much performance as possible from a given class of hardware, accepting whatever image quality fits within that budget. Most video games and simulators fall into this category of real-time graphics. As mentioned above, real-time graphics is currently possible due to significant recent advancements in dedicated hardware components called graphics processing units (GPUs). These GPUs are capable of handling millions of triangles per frame and, within each such triangle, millions or even billions of pixels (i.e. generating these pixel colors). Current DirectX 11 / OpenGL 4.x class hardware is capable of generating complex effects on the fly (i.e. in real time), such as shadow volumes, motion blur and real-time triangle generation, among many others. Although the gap in quality between real-time graphics and traditional off-line graphics is narrowing, the accuracy is still far below that of offline rendering.
Another interesting difference between real-time and non-real-time graphics is the interactivity expected of real-time graphics. User feedback is typically the main motivation for pushing real-time graphics to its limits. In cases like films, the director has complete control and determinism over what is to be drawn in each frame, typically involving weeks or even years of decision-making by a number of people.
In the case of real-time interactive computer graphics, a user is usually in control of what is about to be drawn on the display screen. The user typically provides feedback to the system through an input device (for example, to move a character on the screen), and the system decides the next frame based on that particular action. The display is usually far slower in responsiveness (measured in frames per second) than the input device (whose response time is measured in milliseconds). In a way this is justified by the immense difference between the quick reaction time of human motion and the comparatively slow perception speed of the human visual system; this disparity has driven significant advancements in computer graphics, whereas input devices, which must already be extremely fast to be usable at all, typically take much longer to reach a comparable state of fundamental advancement (e.g., the Wii controller).
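A typical real-time interactive application structures this relationship as a loop that polls input, updates the scene accordingly, and renders the next frame. The sketch below assumes hypothetical pollInput, update and render hooks standing in for real application code; only the loop structure and frame timing are the point here.

    #include <chrono>

    // Placeholder hooks standing in for a real application's input, simulation and drawing.
    bool pollInput() { return false; }   // would return false once the user chooses to quit
    void update(double /*dt*/) {}        // advance animation/physics by dt seconds
    void render() {}                     // submit draw calls for the current frame

    int main() {
        using clock = std::chrono::steady_clock;
        auto previous = clock::now();
        bool running = true;
        while (running) {
            auto now = clock::now();
            double dt = std::chrono::duration<double>(now - previous).count();
            previous = now;
            running = pollInput();  // the user's latest input determines what is drawn next
            update(dt);             // simulation step sized by the elapsed frame time
            render();               // produce the next frame
        }
        return 0;
    }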
Another important factor controlling real-time computer graphics is the combination of physics and animation. These techniques largely dictate what is to be drawn on the screen, or more precisely, where certain objects are to be drawn (deciding their position). They imitate real-world behavior in the temporal dimension rather than the spatial dimensions, often to a degree of realism exceeding that of the rendered images themselves, and in that way compensate for the limits of computer graphics' visual realism.
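As a minimal illustration of how such a technique decides an object's position from frame to frame, the sketch below performs one semi-implicit Euler integration step under gravity; the Particle type and constants are illustrative assumptions, not any particular engine's API.

    // One physics step: integrate velocity, then position, for a point mass under gravity.
    struct Particle {
        float x, y;    // position (e.g. in metres)
        float vx, vy;  // velocity (metres per second)
    };

    void step(Particle& p, float dt) {
        const float gravity = -9.81f;  // m/s^2, acting along -y
        p.vy += gravity * dt;          // update velocity first (semi-implicit Euler)
        p.x  += p.vx * dt;             // then advance position with the new velocity
        p.y  += p.vy * dt;
    }

Calling step once per frame with the elapsed frame time dt yields the object's new position, which the renderer then uses to place it on the screen.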